IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
Abstract
The problem of minimizing an objective that can be written as the sum of n smooth and strongly convex functions is challenging because the cost of evaluating the function and its derivatives is proportional to the number of elements in the sum. The Incremental Quasi-Newton (IQN) method proposed here belongs to the family of stochastic and incremental methods whose cost per iteration is independent of n. IQN iterations are a stochastic version of BFGS iterations that use memory to reduce the variance of stochastic approximations. The method is shown to exhibit local superlinear convergence.

The convergence properties of IQN bridge a gap between deterministic and stochastic quasi-Newton methods. Deterministic quasi-Newton methods exploit the possibility of approximating the Newton step using objective gradient differences. They are appealing because they have a smaller computational cost per iteration relative to Newton's method and achieve a superlinear convergence rate under customary regularity assumptions. Stochastic quasi-Newton methods utilize stochastic gradient differences in lieu of actual gradient differences, which makes their computational cost per iteration independent of the number of objective functions n. However, existing stochastic quasi-Newton methods have at best sublinear or linear convergence. IQN is the first stochastic quasi-Newton method proven to converge superlinearly in a local neighborhood of the optimal solution.

IQN differs from state-of-the-art incremental quasi-Newton methods in three aspects: (i) the use of aggregated information on variables, gradients, and quasi-Newton Hessian approximation matrices to reduce the noise of gradient and Hessian approximations; (ii) the approximation of each individual function by a Taylor expansion whose linear and quadratic terms are evaluated with respect to the same iterate; and (iii) the use of a cyclic scheme to update the functions in lieu of a random selection routine. We use these fundamental properties of IQN to establish its local superlinear convergence rate. The presented numerical experiments are consistent with our theoretical results and demonstrate the advantage of IQN relative to other incremental methods.
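To make the three ingredients above concrete, here is a minimal Python sketch of a cyclic incremental BFGS loop in the spirit of that description. The function names (iqn_sketch, bfgs_update) and the direct linear solve are our own simplifications, not the paper's implementation: the paper obtains O(d^2) cost per iteration by maintaining the inverse of the aggregated matrix with rank-one (Sherman-Morrison) updates rather than re-solving the d x d system.

    import numpy as np

    def bfgs_update(B, s, y):
        # Standard BFGS update of the Hessian approximation B from the
        # displacement s and gradient difference y; skipped when the
        # curvature s'y is not positive, so B stays positive definite.
        sy = s @ y
        if sy <= 1e-12:
            return B
        Bs = B @ s
        return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy

    def iqn_sketch(grads, x0, n, num_passes=20):
        # grads: list of n callables, grads[i](x) = gradient of f_i at x.
        d = x0.size
        z = [x0.copy() for _ in range(n)]       # per-function iterates
        g = [grads[i](x0) for i in range(n)]    # per-function gradients
        B = [np.eye(d) for _ in range(n)]       # per-function Hessian approx.
        B_sum = sum(B)                          # aggregated matrix
        u = sum(Bi @ zi for Bi, zi in zip(B, z)) - sum(g)
        x = x0.copy()
        for t in range(num_passes * n):
            i = t % n                           # cyclic, not random, selection
            x = np.linalg.solve(B_sum, u)       # aggregated quasi-Newton step
            g_new = grads[i](x)
            B_new = bfgs_update(B[i], x - z[i], g_new - g[i])
            # Refresh the aggregates for component i only; both linear and
            # quadratic model terms are now evaluated at the same iterate x.
            B_sum = B_sum - B[i] + B_new
            u = u - B[i] @ z[i] + B_new @ x - g_new + g[i]
            z[i], g[i], B[i] = x.copy(), g_new, B_new
        return x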
Similar Resources
Local Convergence Theory of Inexact Newton Methods Based on Structured Least Change Updates
In this paper we introduce a local convergence theory for Least Change Secant Update (LCSU) methods. This theory includes most known methods of this class, as well as some new, interesting quasi-Newton methods. Further, we prove that this class of LCSU updates may be used to generate iterative linear methods to solve the Newton linear equation in the inexact-Newton context. Convergence at a q-superlin...
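For reference, the two conditions at play in this snippet are standard. A least-change secant update picks B_{k+1} as the smallest modification of B_k (in a suitable norm) that satisfies the secant equation, and an inexact Newton step only reduces the linear residual by a factor eta_k < 1. In LaTeX, with F the nonlinear system being solved:

    B_{k+1} s_k = y_k, \qquad s_k = x_{k+1} - x_k, \quad y_k = F(x_{k+1}) - F(x_k),

    \|F(x_k) + F'(x_k)\, s_k\| \le \eta_k \, \|F(x_k)\|, \qquad \eta_k < 1.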
A Superlinearly-Convergent Proximal Newton-type Method for the Optimization of Finite Sums
We consider the problem of optimizing the strongly convex sum of a finite number of convex functions. Standard algorithms for solving this problem in the class of incremental/stochastic methods have at most a linear convergence rate. We propose a new incremental method whose convergence rate is superlinear: the Newton-type incremental method (NIM). The idea of the method is to introduce a model...
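The truncated description matches the usual incremental second-order model, which also underlies IQN: each component f_i remembers the point z_i at which it was last visited, and the next iterate minimizes the sum of the local quadratic models. This is our paraphrase of the construction, not a quote from the paper:

    x^{k+1} = \arg\min_{x} \sum_{i=1}^{n} \Big[ f_i(z_i) + \nabla f_i(z_i)^\top (x - z_i) + \tfrac{1}{2}\, (x - z_i)^\top B_i \, (x - z_i) \Big],

with B_i a Hessian or quasi-Newton approximation of \nabla^2 f_i(z_i).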
Q-Superlinear Convergence of Primal-Dual Interior Point Quasi-Newton Methods for Constrained Optimization
This paper analyzes local convergence rates of primal-dual interior point methods for general nonlinearly constrained optimization problems. For this purpose, we first discuss modified Newton methods and modified quasi-Newton methods for solving a nonlinear system of equations, and show local Q-quadratic/Q-superlinear convergence of these methods. These methods are characterized by a perturb...
Local convergence of quasi-Newton methods under metric regularity
We consider quasi-Newton methods for generalized equations in Banach spaces under metric regularity and give a sufficient condition for q-linear convergence. Then we show that the well-known Broyden update satisfies this sufficient condition in Hilbert spaces. We also establish various modes of q-superlinear convergence of the Broyden update under strong metric subregularity, metric regularity ...
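The Broyden update referred to here is the classical rank-one formula for a nonlinear system F(x) = 0 (in the generalized-equation setting of this paper it is applied to the single-valued part):

    B_{k+1} = B_k + \frac{(y_k - B_k s_k)\, s_k^\top}{s_k^\top s_k}, \qquad s_k = x_{k+1} - x_k, \quad y_k = F(x_{k+1}) - F(x_k).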
Proximal Quasi-Newton Methods for Nondifferentiable Convex Optimization
Some global convergence properties of a variable metric algorithm for minimization without exact line searches, in R^n. [23] gives a superlinearly convergent algorithm for minimizing the Moreau-Yosida regularization F. However, this algorithm makes use of the generalized Jacobian of F instead of matrices B_k generated by a quasi-Newton formula. Moreover, the line search is performed on the function F, rath...
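The function F in this snippet denotes the Moreau-Yosida regularization of the nondifferentiable convex objective f, which for a parameter \lambda > 0 is defined by

    F(x) = \min_{y} \Big\{ f(y) + \tfrac{1}{2\lambda}\, \|y - x\|^2 \Big\}.

F is convex and differentiable even when f is not, which is what makes (quasi-)Newton methods applicable to it.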
Journal: CoRR
Volume: abs/1702.00709
Publication year: 2017